Christina's LIS Rant
What types of expertise do librarians have?
In trying to define exploratory search for a current project, I've been confronted with a few different types of expertise. Marchionini (1995) describes 3 types:
- domain
- systems (how to actually use the search interface)
- information seeking (more on the structure of information and how to construct searches, etc.)
Apparently domain experts are more likely to go for higher recall because they will be able to browse the results more efficiently (a slightly different explanation from what they themselves report; see my summary of the RIN paper). Librarians are more likely to do analytic searches with more precise concept mapping.
Anyway, I ran across Collins' (2004) description of "interactional expertise". He talks about a sort of middle ground between tacit and explicit knowledge where you can communicate in the language of the domain, but can't practice the activity. (See, I've always thought of this as the stuff that comes out of the back of an intact male bovine creature, which I'm actually fairly decent at).
Basically he says that through linguistic socialization while immersed in the community we pick up tacit knowledge, but not the practical skill to "pass as a fully competent member of the form of life once we move beyond language" (p. 127).
Collins goes on to talk about how we can tell the difference: contributory knowledge lets you be let loose in the lab and hold your own, whereas interactional knowledge lets you interact with practitioners by understanding their terminology and getting some of their references. He states that this changes the interview into a conversation once the participant understands that you are able to convey what they are doing.
His cases primarily center on what sociologists of science or knowledge do, but this does have some explanatory power for what librarians do. I've been asked, "do you really have to have a science degree to do what you do?" The answer, of course, is no... but there is some common ground there and the negotiation of the reference question is very much aided by having a bit of domain knowledge. It could be that this is overcome by non-science trained folks through this linguistic socialization. In fact, science folks are all lay people outside of their particular area so all librarians are faced with this.
How can this expertise be built besides immersion (which will be time consuming and also requires the cooperation of the scientist participants)? Reading, observing... hmm.
Also, does this add anything to other descriptions of this phenomenon provided by Clark & Brennan (1993) and others discussing grounding in communication? (well I suppose grounding is an active, interactive process while expertise is a state so... )
Update: actually, this is probably more important when we're judging relevance for our customers when we are acting as search intermediaries... hm..
Clark, H. H., & Brennan, S. E. (1993). Grounding in communication. In R. M. Baecker (Ed.), Readings in groupware and computer-supported cooperative work: Assisting human-human collaboration (pp. 222-233). San Mateo, CA: Morgan Kaufmann Publishers.
Collins, H. (2004). Interactional expertise as a third kind of knowledge. Phenomenology and the Cognitive Sciences, 3, 125-143.
Marchionini, G. (1995). Information seeking in electronic environments. Cambridge; New York: Cambridge University Press.
Longino: Theoretical Pluralism in the Sciences of Human Behavior
I attended this colloquium today in the Philosophy department at Maryland.
Dr. Longino is now at Stanford (although I believe very recently).
She spoke about the different ways scientists study the causes of aggression. She has been studying 5 fields:
- quantitative/behavior/population genetics
- molecular genetics
- neurophysiological
- social/environmental
- developmental systems theory (which is attractive to me, but not to anyone in the room who knew anything about it, so...)
She says this is pluralist because each field develops research questions and methods specific to its own work while either holding constant, neglecting, ignoring (as inactive or causally irrelevant), or treating as noise the influences from the other sciences.
She discussed these "causal spaces":
- gene
- genome
- intrauterine
- physiology
- non-shared spaces (e.g., birth order)
- shared spaces (intra-family)
- social/economic
From this view, you can't integrate these into a whole picture -- you can't compare one space to another. She recommends that STS appreciate the partiality of each of these distinctive approaches; it's not helpful to compare them. She believes knowledge comes through social mechanisms as well as through scientific investigations.
An audience member suggested that "mechanisms" is more appropriate than "causes" and that scientists who operate in these spaces appreciate the other spaces...
Other audience members expressed incredulity that there really are monists who believe their way accounts for all variation.
A scientist in the audience wondered whether this area is destined to always be pluralist, or whether these fields are just breaking down an otherwise intractable problem and their methods will eventually converge...
Labels: philosophy of science
If "notes" or "letters" take a year to publish, do scientists have an obligation to self-archive?
In many established science journals, there's a letters or notes section which is less thoroughly peer-reviewed, somewhat preliminary, and short, and is supposed to carry new developments and preliminary reports of completed work. Some organizations have publications that consist only of these things.
Ok, so I was doing a little search today and ran across a "note" from the current issue of Journal of Thermodynamics and Heat Transfer. Woo-hoo, I thought... this must be new stuff. Cutting edge and all that.
So the "note" was received Jan 2006 and accepted April 2006. What? That's craziness. No doubt the in-group - the peers of the researchers and the more prestigious in that field - know about the work and have received updates through informal channels. The rest of the world, however, needs to hear about it in a journal.
I haven't looked into the policies of this particular publication -- but I think that you should only have an embargo from the date of acceptance, not the date of publication. Aren't some of the SAMPE journals like 2-3 years behind? I also think that the right thing to do is to self-archive the article with the citation and the accepted copy if the journal won't publish it for a year.
For publishers that put things online in advance of print this isn't really a big deal, of course, unless you're from another local institution here that gets journals only through database aggregators (w/12 month embargoes) and don't have access to the journal home pages -- but I digress.
Labels: science communication, science publishing, self archiving
Science Blogging, Public Communication of Science, and Public Intellectuals
I've spent all day (literally) working on a literature review for an upcoming (hopefully) study of how and why scientists use blogs. I'm going back through previous research on blogs in general, personal web pages of scientists, informal scholarly communication, personal information management, and public communication of science. All fascinating stuff -- that's the problem, really, that I keep getting lost and wandering off :) Now I'm just fried so I have to go work more on IR stuff (ha!).
Anyway, to the point(s). First, Lamb and Davidson (2005) found that senior researchers don't pay attention or really worry much about web presence -- they want to be known by their publication record and sort of think their time could be better spent. They found that junior researchers searched for people's web pages and also maintained up to date and detailed personal web pages. This is interesting given Barjak's (2006) findings that senior researchers and those who were more productive used the web more. I think there are some details there to tease out.
My other point is how much I enjoyed reading Cohen (2006) and Gregg (2006).
Cohen (2006) discusses the criticism of blogs as narcissistic and/or as (poor/new/revolutionary) journalism. He argues that blogs are an emerging form which is "subjected to the gravity of the familiar" - IOW that they really are different and their notion of 'public' is different, but they are judged as if they were trying to be something else. They straddle this public-private thing... well, if you're interested, you had better read the paper, because my toasted brain can't do it justice right now. (Of course, your library has to have a subscription to the journal for you to read it.)
Gregg (2006) brings up how scholars who blog make the work life of the scholar banal and ordinary (her words). This struck me because Kyvik (2005) says that one of the reasons for public communication of science is to make visible the invisible work of the scientist - non-scientists don't know what scientists do, so they may be less pro-science and less likely to fund science. Anyway, she also talked about institutional constraints and views of "professionalism" getting in the way of public communication of scholarship and public intellectualism (I went on a whole long side trip there looking for more information on the exact definition of public intellectual and getting involved in French history...). I guess in her field, cultural studies, there's been a lot of criticism about not being "out there" enough. Also, lots of really good stuff here, which I am, at this point, incapable of summarizing.
Barjak, F. (2006). The role of the internet in informal scholarly communication. Journal of the American Society for Information Science and Technology, 57(10), 1350-1367.
Cohen, K. R. (2006). A welcome for blogs. Continuum: Journal of Media & Cultural Studies, 20(2), 161-173.
Gregg, M. (2006). Feeling ordinary: Blogging as conversational scholarship. Continuum: Journal of Media & Cultural Studies, 20(2), 147-160.
Kyvik, S. (2005). Popular science publishing and contributions to public discourse among university faculty. Science Communication, 26, 288-311.
Lamb, R., & Davidson, E. (2005). Information and communication technology challenges to scientific professional identity. Information Society, 21(1), 1-24.
Labels: science blogging, science communication
Eigen factor?
This post is a bit of a place holder for more thought (and a way to not be working on my, oh, 3 research papers and one white paper for work...).
The Bergstrom lab in the Department of Biology at the University of Washington has developed a new metric for journal impact roughly based on eigenvector centrality in SNA (note: the linked description needs to be read continuously for an hour to make sense). The math bit for this measure is actually understandable, so read it if you get a chance.
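The core idea is that a journal matters more if it is cited by journals that themselves matter, which is just the fixed point of a PageRank-style power iteration over the citation network. A minimal sketch of that idea (not the Bergstrom lab's actual algorithm -- the toy matrix, the damping value, and the function name here are my own illustrative assumptions):

```python
import numpy as np

def eigenvector_centrality(adj, damping=0.85, tol=1e-10, max_iter=1000):
    """Power iteration on a column-normalized citation matrix.

    adj[i, j] = citations from journal j to journal i. A damping
    factor (as in PageRank) keeps the iteration well-behaved when
    some journals cite nothing.
    """
    n = adj.shape[0]
    col_sums = adj.sum(axis=0)
    col_sums[col_sums == 0] = 1.0   # avoid divide-by-zero for non-citing journals
    M = adj / col_sums              # column-stochastic transition matrix
    v = np.full(n, 1.0 / n)         # start from a uniform distribution
    for _ in range(max_iter):
        v_next = damping * (M @ v) + (1 - damping) / n
        if np.abs(v_next - v).sum() < tol:
            return v_next
        v = v_next
    return v

# Toy citation network: journal 2 is cited twice by each of the
# others, so it should come out with the highest score.
A = np.array([[0, 1, 1],
              [1, 0, 1],
              [2, 2, 0]], dtype=float)
scores = eigenvector_centrality(A)
print(scores.argmax())  # index 2: the most-cited journal ranks highest
```

Notice that a citation from a highly ranked journal is worth more than one from an obscure journal, which is exactly what plain citation counts (and the ordinary impact factor) ignore.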
Odds that I'll ever get back to this are slim, so... this could make a slight bit of sense in the Ellis-chaining sense...
The categories are screwy, but they come directly from JCR, I think, so this project isn't to be judged on that.
I wonder if this can be gamed by journal editors requiring authors to cite articles in their journals (in order to get published in that same journal?). The web commentary seems to be "discovering" the many pitfalls of citation analysis so this is probably a good thing either way.
Labels: bibliometrics, citation analysis, impact factor
FiveBlogs
5 non-library blogs I read...
- Planetary Society- Clear, easy to read, interesting, enthusiastic and lots of New Horizons coverage!
- Cosmic Variance (but who doesn't?)
- MAKE Magazine - Too much, too much, but I love it anyway and my library finally got our first copy of our new subscription!
- Mathemagenic - except I think she's on maternity leave, so I guess the doctoral work is on a brief pause :)
- I check in (a lot) at Scienceblogs.com (and now scientificblogging.com, too) to see what's up. I really enjoyed meeting a lot of the scienceblogs folks at the conference so I like to see what they're up to. Plus it's always interesting.
I also read a bunch of KM blogs, but they're library & information science to me so I'm not putting them in this very short list. There are some other 3.0 - social media - information sciencey types who I read regularly, too, but gee, my blog subs are public on bloglines, you can see for yourself if you like!
Labels: fiveblogs
But I don't want it in my e-mail...
Amazon just introduced a monthly "podcast" (you'll see why the quotes shortly) with editor's picks of the 7 most important books of the season (Significant Seven). This sounds like a reasonable thing and probably pretty good marketing. Since I still occasionally substitute at the public library and it's hard for me to keep up with what's hot in popular literature, this sounded like something that I might add to my podcatcher... except... wait, there's no feed?!? Really?
I couldn't visually detect the feed and firefox couldn't auto-detect it... Of course I don't want this stupid thing in my in box.
Usually this company gets it, but this seems a bit bizarre. If you want to tell me something regularly in the hopes that I'll buy something, you'd better have a feed. If you have audio files you'd like me to listen to, you'd better have a feed that I can put in my podcatcher. Customizing your page if you're a major search engine makes no difference to me, because I search almost exclusively with the search box in my browser or by using conquery. I guess I'm not typical -- at work my start page is the intranet home page (important) and at home it's blank.
Update 5/14: They're now real podcasts. (via ResourceShelf)
It's about trust, reliability, accuracy ...
Stacy on Web4Lib reported that a vendor provided obviously bad usage statistics (reporting usage of databases she doesn't have access to) and, when questioned, they said simply that it was a "known issue" and that they would *hide* those stats from her. ARGH! So she apparently tried a few different ways to tell them that hiding-omitting-scrubbing bad stats isn't really the answer she needs. Then they said "trust me" about the rest of the statistics!!! Wha?
She also had this problem with another vendor:
This isn't the only vendor that provided inaccurate usage data this year. Another vendor's statistics showed zero usage after our subscription started. Since I had used it at the beginning of the subscription period, it was clear something was wrong. When this anomaly was reported, the vendor never explained what the problem was, but sent us some completely different (and - surprise! - much higher) usage figures.
So, we're supposed to be using more scientific ways of evaluating our collection (usage factors, impact factors, weighting systems...) and being able to justify costs based on use and need. In fact, some would like to completely remove the professional judgment of the librarian and use only usage metrics (not MPOW, btw, we're sensible). If this is the crap we allow from our vendors, there's no way we can do these evaluations systematically and have them have any relationship to reality. (I'm not criticizing Stacy -- she's obviously not allowing this but fighting it!)
This vendor in question is specifically listed in the COUNTER documents, so should have completed an audit by a CPA. These audits are supposed to help with credibility, reliability, and validity. Maybe the higher-ups at the company don't know what their help desk people are telling librarians who call?
Update 3/26/2007: Since this post, I have been in contact with Michael Gorrell, Senior Vice President and Chief Information Officer, and Kate Hanson of EBSCO. They have been doing some in-depth troubleshooting and have found a problem in an algorithm that led to the errant usage report. See their support site for more information: http://support.ebsco.com/support_news/detail.php?id=341&t=h. As I said in a private e-mail to Mr. Gorrell, my intent was not to criticize EBSCO directly, as they do seem to be cognizant of the importance (generally) and are working to fix the problem (ever since it got escalated from the original help desk person). Once again, libraries are very heavily reliant on vendor-provided statistics for decision making, which is a bit scary. Also, Mr. Gorrell corrected an assumption I made -- COUNTER audits are for content and formatting, not for accuracy... sigh.
Labels: library management, metrics, usage statistics
Blogger meetup at ACRL?
Beatrice and I were chatting about having a blogger meetup in Baltimore during ACRL. She's going to put it out to mailing lists, so keep an eye out. I'll update this if/when time/place are decided.
Labels: acrl2007
Reading Star... I want to go classify something!
I've always appreciated the taxonomists, catalogers, indexers, and those who catalog and classify information so that it can be found. I remember turning in my little musical instrument thesaurus in Organizing Information class and feeling a bit gleeful... (mostly to have it done and that I knew a flute is a woodwind). For the most part, though, I have stayed as far away from this stuff as possible and left it to the people who do it well. Now I'm reading some things by Susan Leigh Star and I'm about ready to go out and classify something (is this one of Mark's intellectual crushes?). It started with:
Star, S. L. (1998). Grounded classification: Grounded theory and faceted classification. Library Trends, 47(2), 218.
Now I'm perusing:
Star, S.L. (1999). The ethnography of infrastructure. The American Behavioral Scientist, 43(3), 377.
How can you not love a woman who compares something with the phone book and determines that the phone book has better narrative structure? :)
I think it is that she writes well and obviously really enjoys her work. Interestingly, she's a sociologist who studied with Strauss. Too much to read, too little time...